Large language models (LLMs) have been shown to perform new tasks from a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
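As a concrete illustration of the open-access release, the sketch below loads a BLOOM checkpoint with the Hugging Face transformers library and generates a short continuation. The small bigscience/bloom-560m variant stands in for the full 176B model, which requires multi-GPU hardware to run.

```python
# Minimal sketch, assuming the checkpoints released on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```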
For mobile robots operating alongside pedestrians, robustly classifying ground infrastructure such as roads and street crossings is essential. While many semantic segmentation datasets are available for autonomous vehicles, models trained on such datasets exhibit a large domain gap when deployed on robots operating in pedestrian spaces. Manually annotating images recorded from the pedestrian perspective is both expensive and time-consuming. To overcome this challenge, we propose TrackletMapper, a framework for annotating ground surface types such as sidewalks, roads, and street crossings without requiring human-annotated data. To this end, we project the robot's ego-trajectory and the paths of other traffic participants into the ego-view camera images, creating sparse semantic annotations for multiple types of ground surfaces from which a ground segmentation model can be trained. We further show that the model can be bootstrapped for additional performance benefits by aggregating a ground surface map and projecting it into the camera images, creating a dense set of training annotations compared to the sparse tracklet annotations. We demonstrate our approach qualitatively and quantitatively on a novel large-scale dataset for mobile robots operating in pedestrian areas. Code and dataset will be made available at http://trackletmapper.cs.uni-freiburg.de.
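To make the projection step concrete, the following minimal sketch (not the authors' code) maps world-frame trajectory points into an ego-view camera image to produce sparse per-pixel ground-type labels; the intrinsics K, extrinsics R and t, and the label ids are illustrative assumptions.

```python
# Illustrative sketch of the core projection step: world-frame trajectory
# points are mapped into the ego-view image to yield sparse ground-type labels.
import numpy as np

def project_trajectory(points_w, K, R, t, label, h, w):
    """Project Nx3 world points into an (h, w) sparse label image."""
    pts_cam = (R @ points_w.T + t.reshape(3, 1)).T        # world -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                  # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)             # perspective division
    ok = (0 <= uv[:, 0]) & (uv[:, 0] < w) & (0 <= uv[:, 1]) & (uv[:, 1] < h)
    labels = np.full((h, w), -1, dtype=int)               # -1 marks unlabeled pixels
    labels[uv[ok, 1], uv[ok, 0]] = label                  # e.g. 0 = road, 1 = sidewalk
    return labels

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
demo = project_trajectory(np.array([[0.0, 1.5, 4.0]]), K, np.eye(3), np.zeros(3),
                          label=1, h=480, w=640)
print(np.argwhere(demo == 1))                             # one labeled pixel
```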
The ability to process and retain new information as naturally as humans do is a feat highly sought after when training neural networks. Unfortunately, conventional optimization algorithms typically require large amounts of data to be available at training time, and updates with respect to new data are difficult once the training process has been completed. In fact, when new data or tasks arise, previous progress may be lost, since neural networks are prone to catastrophic forgetting. Catastrophic forgetting describes the phenomenon in which a neural network completely forgets previous knowledge when acquiring new information. We propose a novel training algorithm, called training by explaining, in which we leverage layer-wise relevance propagation to retain the information a neural network has already learned in previous tasks when training on new data. The method is evaluated on a range of benchmark datasets as well as on more complex data. Our method not only successfully retains the knowledge of old tasks within neural networks, but does so more resource-efficiently than other state-of-the-art solutions.
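For readers unfamiliar with layer-wise relevance propagation, the sketch below shows the standard epsilon-rule for a single linear layer, the basic redistribution step such a method builds on; it is a generic illustration, not the paper's algorithm.

```python
# Generic epsilon-rule of layer-wise relevance propagation for one linear
# layer; relevance is redistributed from outputs to inputs while its total
# is (approximately) conserved. Purely illustrative.
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs a through W."""
    z = W @ a + b                      # forward pre-activations
    z = z + eps * np.sign(z)           # stabilizer against division by zero
    s = R_out / z                      # relevance per unit of pre-activation
    return a * (W.T @ s)               # input relevance

W = np.array([[1.0, -2.0], [0.5, 1.0]])
R_in = lrp_epsilon(np.array([1.0, 2.0]), W, np.zeros(2), np.array([1.0, 0.0]))
print(R_in, R_in.sum())                # sums to ~1.0, matching the output relevance
```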
Background: Although convolutional neural networks (CNNs) achieve high diagnostic accuracy for detecting Alzheimer's disease (AD) dementia based on magnetic resonance imaging (MRI) scans, they have not yet been applied in clinical routine. One important reason for this is the lack of model comprehensibility. Recently developed visualization methods for deriving CNN relevance maps may help to fill this gap. We investigated whether models with higher accuracy also rely more on discriminative brain regions predefined by prior knowledge. Methods: We trained a CNN for the detection of AD in N = 663 T1-weighted MRI scans of patients with dementia and amnestic mild cognitive impairment (MCI) and verified the accuracy of the models via cross-validation and in three independent samples including a total of N = 1655 cases. We evaluated the association between relevance scores and hippocampus volume to validate the clinical utility of this approach. To improve model comprehensibility, we implemented an interactive visualization of the 3D CNN relevance maps. Results: Across the three independent datasets, group separation showed high accuracy for AD dementia versus controls (AUC $\geq 0.92$) and moderate accuracy for MCI versus controls (AUC $\approx 0.75$). The relevance maps indicated that hippocampal atrophy was considered the most informative factor for AD detection, with additional contributions from atrophy in other cortical and subcortical regions. Relevance scores within the hippocampus were highly correlated with hippocampal volume (Pearson's r $\approx -0.86$, p < 0.001). Conclusion: The relevance maps highlighted atrophy in the regions we had hypothesized a priori. This strengthens the comprehensibility of CNN models, which were trained in a purely data-driven manner based on scans and diagnostic labels.
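The reported correlation analysis amounts to summing relevance inside a hippocampus mask per subject and computing Pearson's r against hippocampal volume; the sketch below illustrates this with synthetic stand-in arrays (the CNN and the relevance computation are not shown).

```python
# Synthetic stand-ins illustrate the validation step: per-subject relevance
# inside a hippocampus mask, correlated with hippocampal volume.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 50
relevance_maps = rng.random((n, 32, 32, 32))        # stand-in 3D relevance maps
hippo_masks = rng.random((n, 32, 32, 32)) > 0.9     # stand-in hippocampus masks
hippo_volumes = rng.normal(3000.0, 400.0, n)        # stand-in volumes (mm^3)

scores = np.array([rel[m].sum() for rel, m in zip(relevance_maps, hippo_masks)])
r, p = pearsonr(scores, hippo_volumes)              # the paper reports r ~ -0.86
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```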
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
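Logic Mill's actual API schema is not reproduced here, but the underlying idea, encoding documents with a pre-trained language model and ranking them by cosine similarity, can be sketched as follows; the encoder choice is an arbitrary stand-in.

```python
# Illustration of the underlying idea only; Logic Mill's own models and API differ.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "A method for detecting communities in large graphs.",
    "Patent for a graph-partitioning circuit.",
    "A study of bird migration patterns.",
]
emb = model.encode(docs, convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1:]))   # similarity of document 0 to the others
```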
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
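A hypothetical example of the prompting pattern the study examines is shown below, using the openai Python client; the prompt wording and model choice are illustrative assumptions, not the questionnaire's protocol.

```python
# Hypothetical prompt pattern for report simplification; purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
report = "Mild cardiomegaly. No focal consolidation, pleural effusion, or pneumothorax."
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Explain this radiology report in plain language a patient "
                   f"can understand:\n{report}",
    }],
)
print(response.choices[0].message.content)
```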
The analysis of network structure is essential to many scientific areas, ranging from biology to sociology. As the computational task of clustering these networks into partitions, i.e., solving the community detection problem, is generally NP-hard, heuristic solutions are indispensable. The exploration of expedient heuristics has led to particularly promising approaches in the emerging technology of quantum computing. Motivated by the substantial hardware demands of all established quantum community detection approaches, we introduce a novel QUBO-based approach that needs only as many qubits as the graph has nodes and is represented by a QUBO matrix as sparse as the input graph's adjacency matrix. The substantial improvement in the sparsity of the QUBO matrix, which is typically very dense in related work, is achieved through the novel concept of separation-nodes. Instead of assigning every node to a community directly, this approach relies on identifying a separation-node set which, upon its removal from the graph, yields a set of connected components representing the core components of the communities. Employing a greedy heuristic to assign the nodes from the separation-node set to the identified community cores, our experimental results provide a proof of concept. This work hence presents a promising approach to NISQ-ready quantum community detection, catalyzing the application of quantum computers to the network structure analysis of large-scale, real-world problem instances.
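The separation-node concept admits a simple classical sketch (the QUBO formulation itself is not reproduced here): remove a candidate separation-node set, take the resulting connected components as community cores, and greedily re-attach the removed nodes. The example separation nodes are a hypothetical choice.

```python
# Classical sketch of the separation-node concept; the QUBO step that would
# select the separation nodes is omitted.
import networkx as nx

def communities_from_separation_nodes(G, sep_nodes):
    cores = [set(c) for c in nx.connected_components(G.subgraph(G.nodes - sep_nodes))]
    for v in sep_nodes:
        best = max(cores, key=lambda core: sum(1 for u in G[v] if u in core))
        best.add(v)                    # attach v to the core it touches most
    return cores

G = nx.karate_club_graph()
print(communities_from_separation_nodes(G, {0, 33}))   # hypothetical separation nodes
```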
The SINDy algorithm has been successfully used to identify the governing equations of dynamical systems from time series data. In this paper, we argue that this makes SINDy a potentially useful tool for causal discovery, and that existing tools for causal discovery can be used to dramatically improve the performance of SINDy as a tool for robust sparse modeling and system identification. We then demonstrate empirically that augmenting the SINDy algorithm with tools from causal discovery can provide engineers with a tool for learning causally robust governing equations.
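A minimal example of the baseline SINDy step, using the pysindy package (assumed installed), is shown below; the causal-discovery augmentation argued for in the paper is not included.

```python
# Baseline SINDy usage: recover the harmonic oscillator dx/dt = y, dy/dt = -x
# from a sampled trajectory.
import numpy as np
import pysindy as ps

t = np.linspace(0, 10, 1000)
x = np.stack([np.sin(t), np.cos(t)], axis=1)   # trajectory of the oscillator

model = ps.SINDy()
model.fit(x, t=t)
model.print()          # expect x0' ~ 1.0 x1 and x1' ~ -1.0 x0
```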
The following article presents a memetic algorithm that applies deep reinforcement learning (DRL) to solve practically oriented dual resource constrained flexible job shop scheduling problems (DRC-FJSSP). In recent years, there has been extensive research on DRL techniques, but without considering realistic, flexible, and human-centered shopfloors. A research gap can be identified in the context of make-to-order oriented discontinuous manufacturing, as it is often found in medium-sized companies with high service levels. From practical industry projects in this domain, we recognize requirements to depict flexible machines, human workers and capabilities, setup and processing operations, material arrival times, complex job paths with parallel tasks for bill-of-material (BOM) manufacturing, sequence-dependent setup times, and (partially) automated tasks. On the other hand, intensive research has been done on metaheuristics in the context of DRC-FJSSP. However, there is a lack of suitable and generic scheduling methods that can be holistically applied in sociotechnical production and assembly processes. In this paper, we first formulate an extended DRC-FJSSP induced by the practical requirements mentioned. Then we present our proposed hybrid framework with parallel computing for multicriteria optimization. Through numerical experiments with real-world data, we confirm that the framework generates feasible schedules efficiently and reliably. Utilizing DRL instead of random operations leads to better results and outperforms traditional approaches.
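The memetic pattern described, population-based search combined with local refinement of each offspring, can be sketched generically as below; the DRL-guided operator selection and the full DRC-FJSSP model are abstracted into placeholder callables, which are our assumptions.

```python
# Generic memetic loop: evolutionary recombination plus local-search refinement.
# evaluate/crossover/mutate/local_search are placeholders for the DRC-FJSSP
# specifics and the DRL-guided operator choice.
import random

def memetic_search(init_pop, evaluate, crossover, mutate, local_search,
                   generations=100):
    pop = sorted(init_pop, key=evaluate)
    for _ in range(generations):
        p1, p2 = random.sample(pop[: len(pop) // 2], 2)   # favor fitter parents
        child = local_search(mutate(crossover(p1, p2)))   # memetic refinement
        pop[-1] = child                                   # replace worst individual
        pop.sort(key=evaluate)
    return pop[0]

# Toy usage: permutations of 8 "jobs", minimizing displacement from sorted order.
jobs = list(range(8))
best = memetic_search(
    init_pop=[random.sample(jobs, len(jobs)) for _ in range(20)],
    evaluate=lambda s: sum(abs(i - v) for i, v in enumerate(s)),
    crossover=lambda a, b: a[:4] + [j for j in b if j not in a[:4]],
    mutate=lambda s: random.sample(s, len(s)) if random.random() < 0.2 else s,
    local_search=lambda s: sorted(s[:2]) + s[2:],
)
print(best)
```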
The acquisition of high-quality human annotations through crowdsourcing platforms like Amazon Mechanical Turk (MTurk) is more challenging than expected. Annotation quality can be affected by various aspects such as the annotation instructions, the Human Intelligence Task (HIT) design, and the wages paid to annotators. To avoid potentially low-quality annotations, which could mislead the evaluation of automatic summarization system outputs, we investigate the recruitment of high-quality MTurk workers via a three-step qualification pipeline. We show that we can successfully filter out bad workers before they carry out the evaluations and obtain high-quality annotations while optimizing the use of resources. This paper can serve as a basis for recruiting qualified annotators for other challenging annotation tasks.
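A hedged sketch of what the first step of such a pipeline could look like with boto3's MTurk client is shown below: creating a qualification test that workers must pass before they can accept the evaluation HITs. The qualification name, XML payloads, and duration are illustrative placeholders, not the paper's actual pipeline.

```python
# Illustrative first qualification step via boto3's MTurk client.
import boto3

QUESTION_FORM_XML = "<QuestionForm>...</QuestionForm>"   # placeholder test definition
ANSWER_KEY_XML = "<AnswerKey>...</AnswerKey>"            # placeholder answer key

mturk = boto3.client("mturk", region_name="us-east-1")
qual = mturk.create_qualification_type(
    Name="Summary-evaluation qualification (example)",
    Description="Pass a short test before evaluating system summaries.",
    QualificationTypeStatus="Active",
    Test=QUESTION_FORM_XML,
    AnswerKey=ANSWER_KEY_XML,
    TestDurationInSeconds=600,
)
print(qual["QualificationType"]["QualificationTypeId"])
```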